
Persistent Fixture Management

The book has now been published and the content of this chapter has likely changed substantially.

About This Chapter

In the Transient Fixture Management narrative, we looked at how we can go about building in-memory Fresh Fixtures (page X). We noted that managing Fresh Fixtures is more complicated when the fixture is persisted either by the system under test (SUT) or the test itself. In this chapter I introduce the additional patterns required to manage persistent fixtures, including Persistent Fresh Fixtures (see Fresh Fixture) as well as Shared Fixtures (page X).

Managing Persistent Fresh Fixtures




Fig. X: A Fresh Fixture can be either transient or persistent.

We can apply a Fresh Fixture strategy even if the test fixture is naturally persistent but we must have a way to tear it down after each test.

The term Persistent Fresh Fixture might sound like an oxymoron but it is actually not as large a contradiction as it might first seem. The Fresh Fixture strategy relies on each run of each Test Method (page X) using a newly created fixture. The name speaks to its intent: we do not reuse the fixture! It does not need to imply that the fixture is transient, only that it is not reused. Persistent Fresh Fixtures present several challenges not encountered with non-persistent ones. Slow Tests (page X) is one of them, and we will discuss it in more detail when we talk about Shared Fixtures. Unrepeatable Test (see Erratic Test on page X) is the challenge I will focus on here.

What Makes Fixtures Persistent?

A fixture, fresh or otherwise, can become persistent for one of two reasons. The first reason is that the SUT is a stateful object and it "remembers" how it was used in the past. This most often occurs when the SUT includes a database but it can occur simply because the SUT uses class variables to hold some of its data. The second reason is that the Testcase Class (page X) holds a reference to an otherwise temporary Fresh Fixture in a way that makes it survive across Test Method invocations.

Some members of the xUnit family provide a mechanism to reload all classes at the beginning of each test run. It may appear as an option -- a check box labelled "Reload Classes" -- or it may be automatic. This feature keeps the fixture from becoming persistent due to being referenced from a class variable; it does not prevent the Fresh Fixture from becoming persistent if either the SUT or the test puts the fixture into the file system or a database.

Issues Caused by Persistent Fresh Fixtures

When fixtures are persistent we may find that subsequent runs of the same Test Method are trying to recreate a fixture that already exists. This may cause conflicts between the pre-existing and newly created resources; violating unique key constraints in the database is the most common example, but it could be as simple as trying to create a file with the same name as one that already exists. One way to avoid these Unrepeatable Tests is to tear down the fixture at the end of each test; another is to avoid conflicts by using Distinct Generated Values (see Generated Value on page X) for any identifiers that might otherwise collide.

Tearing Down Persistent Fresh Fixtures

Unlike fixture set up code, which needs to help us understand the preconditions of the test, fixture tear down code is purely a matter of good housekeeping. It does not help us understand the behavior of the SUT but it has the potential to obscure the intent of the test or at least make it harder to understand. Therefore, the best kind of tear down code is the non-existent kind. We should avoid writing tear down code whenever we can. This is why Garbage-Collected Teardown (page X) is so preferable. Unfortunately, we cannot take advantage of Garbage-Collected Teardown if our Fresh Fixture is persistent.

Hand-Coded TearDown

One way to ensure that the fixture is destroyed after we are done with it is to include test-specific teardown code within our Test Methods. This seems pretty simple but is in fact more complicated than meets the eye. Consider the following example:

   public void testGetFlightsByOriginAirport_NoFlights() throws Exception {
      // Fixture setup
      BigDecimal outboundAirport = createTestAirport("1OF");
      // Exercise System
      List flightsAtDestination1 = facade.getFlightsByOriginAirport(outboundAirport);
      // Verify Outcome
      assertEquals(0,flightsAtDestination1.size());
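      // Fixture teardown (naive: never reached if the assertion above fails)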
      facade.removeAirport(outboundAirport);
   }
Example: NaiveJavaTeardown

This Naive Inline Teardown (see Inline Teardown on page X) will tear down the fixture when the test passes but it won't tear down the fixture if the test fails or ends with an error. This is because the calls to the Assertion Methods (page X) throw exceptions, so we may never reach the tear down code. To ensure that the Inline Teardown code is always run, we have to wrap everything in the Test Method that might throw an exception in an error-handling construct, such as Java's try/finally. Here's the same test suitably modified:

   public void testGetFlightsByOriginAirport_NoFlights_td() throws Exception {
      // Fixture setup
      BigDecimal outboundAirport = createTestAirport("1OF");
      try {
         // Exercise System
         List flightsAtDestination1 = facade.getFlightsByOriginAirport(outboundAirport);
         // Verify Outcome
         assertEquals(0,flightsAtDestination1.size());
      } finally {
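         // Fixture teardown: the finally block runs even if the assertion throws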
         facade.removeAirport(outboundAirport);
      }
   }
Example: GuaranteedJavaTeardown

Unfortunately, the mechanism that ensures the tear down code is always run introduces a fair bit of complication into the Test Method. It gets even more complicated when there are several resources to clean up, because an attempt to clean up one resource could fail and we want to ensure that the other resources are still cleaned up. We can address part of this by using an Extract Method [Fowler] refactoring to move the tear down code into a Test Utility Method (page X) that we call from inside the error-handling construct. This Delegated Teardown (see Inline Teardown) hides the complexity of dealing with tear down errors but we still need to ensure that the method gets called even when we have test errors or test failures.
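
The same test using Delegated Teardown might look something like the sketch below; the helper name removeTestAirportSafely and its error reporting are illustrative assumptions rather than code from the production example:

   public void testGetFlightsByOriginAirport_NoFlights_delegated() throws Exception {
      // Fixture setup
      BigDecimal outboundAirport = createTestAirport("1OF");
      try {
         // Exercise System
         List flightsAtDestination1 = facade.getFlightsByOriginAirport(outboundAirport);
         // Verify Outcome
         assertEquals(0, flightsAtDestination1.size());
      } finally {
         // Delegated Teardown: the messy details live in a Test Utility Method
         removeTestAirportSafely(outboundAirport);
      }
   }

   private void removeTestAirportSafely(BigDecimal airport) {
      try {
         facade.removeAirport(airport);
      } catch (Exception e) {
         // Report the problem but don't rethrow, so any remaining cleanup can still run
         System.err.println("Could not remove test airport " + airport + ": " + e);
      }
   }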

Most members of the xUnit family solve this problem by supporting Implicit Teardown (page X). The Test Automation Framework (page X) calls a special tearDown method after each Test Method regardless of whether the test has passed or failed. This avoids the error handling code within the Test Method but it imposes two requirements on our tests. First, the fixture must be accessible from the tearDown method so we must use instance variables (preferred), class variables or global variables to hold the fixture. Second, we must ensure that the tearDown method works properly with each of the Test Methods regardless of the fixture they set up. (This is less of an issue with Testcase Class per Fixture (page X) since the fixture should always be the same; with other Testcase Class organizations we may need to include Teardown Guard Clauses (see Inline Teardown) within the tearDown method to ensure that it doesn't cause errors when it runs.)
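
With Implicit Teardown, the fixture reference moves into an instance variable and the cleanup moves into the tearDown method. A minimal JUnit 3 style sketch follows; the instance variable name and the Teardown Guard Clause shown here are illustrative assumptions:

   private BigDecimal outboundAirport;

   public void testGetFlightsByOriginAirport_NoFlights_implicit() throws Exception {
      // Fixture setup (held in an instance variable so tearDown can find it)
      outboundAirport = createTestAirport("1OF");
      // Exercise System
      List flightsAtDestination1 = facade.getFlightsByOriginAirport(outboundAirport);
      // Verify Outcome
      assertEquals(0, flightsAtDestination1.size());
   }

   protected void tearDown() throws Exception {
      // Teardown Guard Clause: not every Test Method creates an airport
      if (outboundAirport != null) {
         facade.removeAirport(outboundAirport);
         outboundAirport = null;
      }
      super.tearDown();
   }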

Matching Setup With Teardown Code Organization

Given three ways of organizing our set up code (Inline Setup (page X), Delegated Setup (page X) and Implicit Setup (page X)) and three ways of organizing our tear down code (Inline Teardown, Delegated Teardown and Implicit Teardown), we have nine possible combinations to choose from. It turns out to be a pretty simple decision because it is not important for the tear down code to be visible to the test reader. We simply choose the most appropriate set up code organization and either the equivalent or a more hidden version of tear down. For example, it is appropriate to use Implicit Teardown even with Inline Setup or Delegated Setup; it is almost never a good idea to use Inline Teardown with anything other than Inline Setup, and even then it should probably be avoided!

Set Up Mechanism   | Inline TearDown   | Delegated TearDown   | Implicit TearDown
Inline SetUp       | not recommended   | acceptable           | recommended
Delegated SetUp    | not recommended   | acceptable           | recommended
Implicit SetUp     | not recommended   | not recommended      | recommended

Table X: The compatibility of various fixture setup and teardown strategies for persistent test fixtures.

Automated TearDown

The problem with hand-coded tear down is that it is extra work to write, and the tear down code is hard to get right and even harder to test. When we get it wrong it leads to Erratic Tests caused by Resource Leakage, because the test that fails as a result is often not the one that failed to clean up properly.

In languages that support garbage collection, tearing down a Fresh Fixture should be pretty much automatic. As long as our fixtures are only referenced by instance variables that go out of scope when our Testcase Object (page X) is destroyed, garbage collection will clean them up. Garbage collection won't work if we use class variables or if our fixtures include persistent objects such as files or database rows. In these cases we will need to do our own clean up.

This leads the lazy but creative programmer to come up with a way to automate the tear down logic. The important thing to note is that tear down code doesn't help us understand the test, so it is better for it to be hidden (unlike set up code, which is often very important for understanding the test). We can remove the need to write custom hand-crafted tear down code for each Test Method or Testcase Class by building an Automated Teardown (page X) mechanism. It consists of the following parts:

  1. A well-tested mechanism to iterate over a list of objects that need to be deleted and catch/report any errors it encounters while ensuring that all the deletions are attempted.
  2. A dispatching mechanism that invokes the deletion code appropriate to the kind of object to be deleted. This is often implemented as a Command [GOF] object wrapping each object to be deleted, but it could be as simple as switching on the object's class or calling a delete method on the object itself.
  3. A registration mechanism to add newly-created objects (suitably wrapped) to the list of objects to be deleted.

Once we have built this mechanism we can simply invoke the registration method from our Creation Methods (page X) and the clean up method from the tearDown method. The latter can be done in a Testcase Superclass (page X) that all our Testcase Classes inherit from. We can even extend this mechanism to delete objects created by the SUT as it is exercised by using an observable Object Factory (see Dependency Lookup on page X) inside the SUT and having our Testcase Superclass register itself as an Observer [GOF] of object creation.
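
The following sketch shows what a minimal Automated Teardown mechanism might look like; the names AutomatedTeardown, Deleter and registerForTeardown are illustrative assumptions rather than a published API:

   import java.util.ArrayList;
   import java.util.List;

   public class AutomatedTeardown {
      // Command-style wrapper around "how to delete one object"
      public interface Deleter {
         void delete() throws Exception;
      }

      private final List<Deleter> deleters = new ArrayList<Deleter>();

      // Registration mechanism: called from Creation Methods as objects are created
      public void registerForTeardown(Deleter deleter) {
         deleters.add(deleter);
      }

      // Iteration mechanism: attempt every deletion and report (not rethrow) failures
      public void teardownAll() {
         for (Deleter deleter : deleters) {
            try {
               deleter.delete();
            } catch (Exception e) {
               System.err.println("Automated teardown failed: " + e);
            }
         }
         deleters.clear();
      }
   }

A Creation Method such as createTestAirport could then register a Deleter that calls facade.removeAirport, and a Testcase Superclass could call teardownAll from its tearDown method.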

Database TearDown

When our persistent Fresh Fixture has been built entirely in a relational database, we can use certain features of the database to implement our tear down. Table Truncation Teardown (page X) is a brute force way to blow away the entire contents of a table with a single database command. Of course, this is only appropriate when each Test Runner (page X) has its own Database Sandbox (page X). A somewhat less drastic approach is to use Transaction Rollback Teardown (page X) to undo all changes made within the context of the current test. This relies on the SUT having been designed using the Humble Transaction Controller (see Humble Object on page X) pattern so that we can invoke the business logic from the test without having the SUT commit the transaction automatically. Both these database-specific tear down patterns are most commonly implemented using Implicit Teardown to keep the tear down logic out of the Test Methods.
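
As a rough illustration, a Table Truncation Teardown implemented as Implicit Teardown might look like the sketch below; the table names and the getConnection helper are assumptions about the application's schema and test infrastructure:

   protected void tearDown() throws Exception {
      // Brute force: empty every table this test might have touched.
      // Only safe when each Test Runner has its own Database Sandbox.
      java.sql.Connection connection = getConnection();
      try {
         java.sql.Statement statement = connection.createStatement();
         // TRUNCATE TABLE is fast; use DELETE FROM where TRUNCATE is unavailable
         statement.executeUpdate("TRUNCATE TABLE FLIGHT");
         statement.executeUpdate("TRUNCATE TABLE AIRPORT");
         statement.close();
      } finally {
         connection.close();
      }
      super.tearDown();
   }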

Avoiding the Need for TearDown

So far we have looked at ways to do fixture tear down. Now, let us look at ways to avoid fixture tear down.

Avoiding Fixture Collisions

We need to do fixture tear down at all for three reasons:

  1. The accumulation of leftover fixture objects can cause tests to run slowly.
  2. The leftover fixture objects can cause the SUT to behave differently or our assertions to report incorrect results.
  3. The leftover fixture objects can prevent us from creating the Fresh Fixture our test requires.

Each of these issues needs to be addressed. The easiest to address is the first one: we can schedule a periodic cleansing of the persistence mechanism back to a known, minimalist state. Unfortunately, this only helps if we can get the tests to run correctly. The second issue can be addressed by using Delta Assertions (page X) rather than "absolute" assertions. These work by taking a snapshot of the fixture before the test is run and verifying that the expected differences have appeared after exercising the SUT.
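
A Delta Assertion might look something like the following sketch; the createFlight call on the facade stands in for whatever SUT method is being exercised, and its signature is an assumption:

   public void testCreateFlight_deltaAssertion() throws Exception {
      // Fixture setup
      BigDecimal outboundAirport = createTestAirport("1OF");
      BigDecimal inboundAirport = createTestAirport("1IF");
      // Snapshot the relevant part of the fixture before exercising the SUT
      int flightCountBefore = facade.getFlightsByOriginAirport(outboundAirport).size();
      // Exercise System
      facade.createFlight(outboundAirport, inboundAirport);
      // Verify Outcome: assert on the difference, not on an absolute count
      int flightCountAfter = facade.getFlightsByOriginAirport(outboundAirport).size();
      assertEquals("number of new flights", 1, flightCountAfter - flightCountBefore);
   }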

The last issue can be addressed by ensuring that each test generates a different set of fixture objects each time it is run. This means that any objects the test creates have to be given totally unique identifiers: unique filenames, unique keys, and so on. This can be done by building a simple unique ID generator and getting a new ID at the beginning of each test. We then use that ID as part of the identity of each newly created fixture object. If the fixture is shared beyond a single Test Runner, we may need to include something about the user in the unique identifiers we create; the currently logged-in user ID is usually sufficient. Using Distinct Generated Values as keys has another benefit: it allows us to implement a Database Partitioning Scheme (see Database Sandbox) that makes it possible to use absolute assertions despite the presence of leftover fixture objects.
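
A minimal sketch of generating such Distinct Generated Values follows; the class name and the millisecond-seeded counter are illustrative assumptions, and a database sequence or GUID would serve just as well:

   public class TestIdGenerator {
      private static long nextId = System.currentTimeMillis();

      // Returns an identifier that no earlier test run has used
      public static synchronized long nextTestId() {
         return nextId++;
      }
   }

   // Used inside a Creation Method to build unique keys, filenames, etc.:
   private String uniqueAirportCode(String prefix) {
      return prefix + "-" + TestIdGenerator.nextTestId();   // e.g. "1OF-1299650366463"
   }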

Avoiding Fixture Persistence

We seem to be going to a lot of trouble to undo the side-effects caused by a persistent Fresh Fixture. Wouldn't it be nice if we could avoid all of this work? The good news is that we can. The bad news is that we need to make our Fresh Fixture non-persistent. When the SUT is to blame for the persistence of the fixture, one possibility is to replace the persistence mechanism with a Test Double (page X) that the test can wipe out at will. A good example of this is the use of a Fake Database (see Fake Object on page X). When the test is to blame for fixture persistence the solution is easier: just use a less persistent fixture reference mechanism.

Dealing with Slow Tests

A major drawback of using a Persistent Fresh Fixture is speed, or lack thereof. File systems and databases are much slower than the processors used in modern computers, so tests that interact with the database tend to run much, much slower than tests that can run entirely in memory. Part of this is because the SUT is accessing the fixture from disk, but that turns out to be only a small part of the slowdown. Setting up the Fresh Fixture at the beginning of each test and tearing it down at the end of each test typically takes many more disk accesses than the SUT itself uses to access the fixture. The practical consequence is that tests that access the database often take fifty to one hundred times (that's two orders of magnitude!) as long to run as those that run entirely in memory, all other things being equal.

The typical reaction to Slow Tests caused by Persistent Fresh Fixtures is to eliminate the fixture set up and tear down overhead by reusing the fixture across many tests. Assuming we need five disk accesses to set up and tear down the fixture for every disk access done by the SUT, the absolute best we can do by switching to a Shared Fixture (your mileage may vary!) is somewhere around ten times as slow as in-memory tests. This is still too slow in most situations and it comes with a hefty price: the tests are no longer independent. That means we will likely have Interacting Tests (see Erratic Test), Lonely Tests (see Erratic Test) and Unrepeatable Tests on top of our Slow Tests!

A much better solution is to remove the need to have a disk-based database under the application. With a small amount of effort we should be able to replace the disk-based database with an In-Memory Database (see Fake Object) or a Fake Database. This is best done early in the project while the effort is still low. Yes, there are some challenges such as dealing with stored procedures but these are all surmountable.

This isn't the only way of dealing with Slow Tests; see the sidebar Faster Tests Without Shared Fixtures (page X) for other strategies.

Managing Shared Fixtures

Managing Shared Fixtures has a lot in common with managing persistent Fresh Fixtures, except that we deliberately choose not to tear the fixture down after every test so that we can reuse it in subsequent tests. This implies two things: first, we must be able to access the fixture from the other tests; second, we must have a way of triggering both the construction and the eventual tear down of the fixture.




Fig. X: A Shared Fixture with two Test Methods that share it.

A Shared Fixture is set up once and used by two or more tests, which may interact, either deliberately or accidentally, as a result. Note the lack of a fixture setup phase in the second test.

Accessing Shared Fixtures

Regardless of how and when we choose to build the Shared Fixture, the tests need a way to find the test fixture they are to reuse. The choices available to us depend on the nature of the fixture. When the fixture is stored in a database (the most common use of a Shared Fixture), tests may access it directly without holding direct references to the fixture objects, as long as they know about the database. There may be a temptation to use Hard-Coded Values (see Literal Value on page X) in database lookups to access the fixture objects. This is almost always a bad idea because it couples the tests closely to the fixture implementation and has poor documentation value, leading to an Obscure Test (page X). We can avoid these issues by using Finder Methods (see Test Utility Method) with Intent-Revealing Names [SBPP] to access the fixture. These Finder Methods may have very similar signatures to Creation Methods but they return references to existing fixture objects rather than building brand new ones.
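
A Finder Method for the flight-booking example might look something like this sketch; the method name and the AIRPORT_WITH_NO_FLIGHTS_ID constant are illustrative assumptions:

   // Finder Method with an Intent-Revealing Name: it returns a reference to part of
   // the Shared Fixture rather than creating new objects.
   private BigDecimal findAirportWithNoFlights() {
      // The key is hidden here instead of being hard-coded in every Test Method
      return AIRPORT_WITH_NO_FLIGHTS_ID;
   }

   public void testGetFlightsByOriginAirport_NoFlights_shared() throws Exception {
      // Fixture lookup (no setup; the Shared Fixture already exists)
      BigDecimal outboundAirport = findAirportWithNoFlights();
      // Exercise System
      List flightsAtDestination1 = facade.getFlightsByOriginAirport(outboundAirport);
      // Verify Outcome
      assertEquals(0, flightsAtDestination1.size());
   }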

We have a range of possible solutions when the fixture is stored in memory. If all the tests that need to share the fixture are in the same Testcase Class, we can use a fixture holding class variable to hold the reference to the fixture. As long as we give the variable an Intent Revealing Name, the test reader should be able to understand the preconditions of the test. Another alternative is to use a Finder Method.

If we need to share the fixture across many Testcase Classes, we have to use more sophisticated techniques. We could, of course, let one class declare the fixture holding class variable and have the other tests access the fixture via that variable, but this may create unnecessary coupling between the tests. Another alternative is to move the declaration to a well-known object, a Test Fixture Registry (see Test Helper on page X). This Registry [PEAA] object could be something like a test database or it could merely be a class. It can expose various parts of the fixture via discrete fixture holding class variables or via Finder Methods. When the Test Fixture Registry has only Finder Methods that know how to access the objects but does not hold references to them, we call it a Test Helper.
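
A Test Helper that exposes the Shared Fixture through Finder Methods might look like the sketch below; the class name and the keys it returns are illustrative assumptions:

   // A Test Helper: it holds no fixture state of its own, it only knows how to find it.
   public class FlightFixtureHelper {
      // Finder Methods shared by several Testcase Classes
      public static BigDecimal findAirportWithNoFlights() {
         return new BigDecimal("9001");   // key of a row in the Shared Fixture
      }

      public static BigDecimal findAirportWithAtLeastOneFlight() {
         return new BigDecimal("9002");
      }
   }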

Triggering Shared Fixture Construction

For a test fixture to be shared, it must be built before any Test Method needs it. This could be as late as right before the Test Method's logic is run, just before the entire test suite is run, or at some time earlier. This leads us to the basic patterns of Shared Fixture creation.



Sketch Shared Fixture Setup embedded from Shared Fixture Setup.gif

Fig. X: The plethora of ways to manage a Shared Fixture.

There are many strategies for deciding when to set up a Shared Fixture; the decision is based on how many tests need to reuse the fixture and how many times.

If we are happy with creating the test fixture the first time any test needs it, we can use Lazy Setup (page X) in the setUp method of the corresponding Testcase Class to create it as part of running the first test. Subsequent tests will see that the fixture already exists and reuse it. Because there is no obvious signal that the last test in a test suite (or Suite of Suites (see Test Suite Object on page X)) has been run, we won't know when to tear down the fixture at the end of each test run. This can lead to Unrepeatable Tests since the fixture may survive across test runs (depending on how the various tests access it).
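
In JUnit 3 style, Lazy Setup might look something like this sketch; the static fixture-holding variable is an illustrative assumption:

   private static BigDecimal sharedOutboundAirport = null;

   protected void setUp() throws Exception {
      super.setUp();
      // Lazy Setup: only the first test to run actually builds the Shared Fixture
      if (sharedOutboundAirport == null) {
         sharedOutboundAirport = createTestAirport("1OF");
      }
      // Note: there is no reliable place to tear this fixture down -- hence the
      // risk of Unrepeatable Tests described above.
   }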

If we need to share the fixture more broadly, we could also include a Fixture Setup Testcase at the beginning of the test suite. This is a special case of Chained Tests and suffers from the same problem as Lazy Setup in that we don't know when it is time to tear down the fixture. It also depends on the ordering of tests within a suite so it works best with Test Enumeration (page X).

If we need to be able to tear down the test fixture after running a test suite, we need a fixture management mechanism that tells us when the last test has been run. Several members of the xUnit family support the concept of a setUp method that is run just once for the test suite created from a single Testcase Class. This SuiteFixture Setup (page X) method has a corresponding tearDown method that is called when the last Test Method has finished running. (We can think of it as a built-in decorator for a single Testcase Class.) We can then guarantee that a new fixture is built for each test run, which ensures that the fixture is not left over to cause problems with subsequent test runs. This will prevent Unrepeatable Tests but will not prevent Interacting Tests within the test run. This capability could be added as an extension to any member of the xUnit family. When it isn't supported, or when we need to share the fixture beyond a single Testcase Class, we can resort to using a Setup Decorator (page X) to bracket the running of a test suite with the execution of fixture setUp and tearDown logic. The biggest drawback of a Setup Decorator is that tests that depend on the decorator cannot be run by themselves; they are Lonely Tests.
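
JUnit 4's @BeforeClass and @AfterClass annotations are one example of SuiteFixture Setup. A skeletal sketch follows; making createTestAirport and facade static members of the class is an assumption made for illustration:

   import org.junit.AfterClass;
   import org.junit.BeforeClass;

   public class FlightManagementFacadeTest {
      private static BigDecimal sharedOutboundAirport;

      @BeforeClass
      public static void setUpSharedFixture() throws Exception {
         // Runs once, before the first Test Method in this class
         sharedOutboundAirport = createTestAirport("1OF");
      }

      @AfterClass
      public static void tearDownSharedFixture() throws Exception {
         // Runs once, after the last Test Method in this class has finished
         facade.removeAirport(sharedOutboundAirport);
      }

      // ... Test Methods that reuse sharedOutboundAirport go here ...
   }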

The final option is to build the fixture well before the tests are run -- a Prebuilt Fixture (page X). This approach offers the most options for how the test fixture is actually constructed because the construction needn't be executable from within xUnit. For example, the fixture could be set up manually, using database scripts, by copying a "golden" database, or by running a data generation program. The big disadvantage is that if any tests are Unrepeatable Tests, we will need Manual Intervention (page X) before each test run. As a result, a Prebuilt Fixture is most often used in combination with a Fresh Fixture to construct an Immutable Shared Fixture (see Shared Fixture).

What's Next?

Now that we've determined how we will set up and tear down our fixtures, we can turn our attention to exercising the SUT and verifying that the expected outcome has occurred using calls to Assertion Methods. This is described in more detail in the Result Verification narrative chapter.


